
    Detection and prediction problems with applications in personalized health care

    The United States health-care system is considered unsustainable due to its unbearably high cost. Many of its resources are spent on treating acute conditions rather than on preventing them. Preventive medicine methods are therefore viewed as a potential remedy, since they can help reduce the occurrence of acute health episodes. The work in this dissertation tackles two distinct problems related to the prevention of acute disease. Specifically, we consider: (1) early detection of incorrect or abnormal postures of the human body and (2) prediction of hospitalization due to heart-related diseases. The solution to the former problem could be used to protect people from unexpected injuries or to alert caregivers in the event of a fall. The latter study could help improve health outcomes and save considerable costs due to preventable hospitalizations. For body posture detection, we place wireless sensor nodes on different parts of the human body and use the pairwise measurements of signal strength corresponding to all sensor transmitter/receiver pairs to estimate body posture. We develop a composite hypothesis testing approach which uses a Generalized Likelihood Test (GLT) as the decision rule. The GLT distinguishes between a set of probability density function (pdf) families constructed using a custom pdf interpolation technique. The GLT is compared with the simple Likelihood Test and Multiple Support Vector Machines. The measurements from the wireless sensor nodes are highly variable, and these methods have different degrees of adaptability to this variability; they also handle multiple observations differently. Our analysis and experimental results suggest that the GLT is more accurate and better suited to the problem. For hospitalization prediction, our objective is to explore the possibility of effectively predicting heart-related hospitalizations based on the available medical history of the patients. We extensively explored ways of extracting information from patients' Electronic Health Records (EHRs) and organizing the information in a uniform way across all patients. We applied various machine learning algorithms, including Support Vector Machines, AdaBoost with Trees, and Logistic Regression, adapted to the problem at hand. We also developed a new classifier based on a variant of the likelihood ratio test. The new classifier has classification performance competitive with the more complex alternatives, but has the additional advantage of producing results that are more interpretable. Following this direction of increasing interpretability, which is important in the medical setting, we designed a new method that discovers hidden clusters and, at the same time, makes decisions. This new method introduces an alternating clustering and classification approach with guaranteed convergence and explicit performance bounds. Experimental results with actual EHRs from the Boston Medical Center demonstrate a prediction rate of 82% at a 30% false alarm rate, which could lead to considerable savings when used in practice.
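
    A minimal sketch of the GLT decision rule described above, assuming each posture class is represented by a small family of Gaussian densities standing in for the interpolated pdfs; the labels, parameters, and Gaussian form are illustrative assumptions, not the dissertation's exact construction:

        # Illustrative Generalized Likelihood Test (GLT): each posture class is a
        # *family* of candidate densities; the GLT maximizes the likelihood over
        # the family members before comparing classes.
        import numpy as np
        from scipy.stats import multivariate_normal

        def glt_classify(observations, class_families):
            """observations: (n, d) array of pairwise RSS measurements.
            class_families: dict mapping posture label -> list of (mean, cov) pairs.
            Returns the label whose best-fitting family member has the highest
            total log-likelihood over all observations."""
            best_label, best_score = None, -np.inf
            for label, family in class_families.items():
                # GLT step: maximize over the nuisance parameter (the family member).
                score = max(
                    multivariate_normal.logpdf(observations, mean=mu, cov=cov).sum()
                    for mu, cov in family
                )
                if score > best_score:
                    best_label, best_score = label, score
            return best_label

    Handling multiple observations here simply means summing log-likelihoods across them; as the abstract notes, how multiple observations are combined is one of the points on which the GLT, the simple Likelihood Test, and the SVM-based approach differ.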

    Explaining Machine Learning Classifiers through Diverse Counterfactual Explanations

    Post-hoc explanations of machine learning models are crucial for people to understand and act on algorithmic predictions. An intriguing class of explanations is through counterfactuals, hypothetical examples that show people how to obtain a different prediction. We posit that effective counterfactual explanations should satisfy two properties: feasibility of the counterfactual actions given user context and constraints, and diversity among the counterfactuals presented. To this end, we propose a framework for generating and evaluating a diverse set of counterfactual explanations based on determinantal point processes. To evaluate the actionability of counterfactuals, we provide metrics that enable comparison of counterfactual-based methods to other local explanation methods. We further address necessary tradeoffs and point to causal implications in optimizing for counterfactuals. Our experiments on four real-world datasets show that our framework can generate a set of counterfactuals that are diverse and that approximate local decision boundaries well, outperforming prior approaches to generating diverse counterfactuals. We provide an implementation of the framework at https://github.com/microsoft/DiCE.
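
    A minimal sketch of the set-scoring idea behind such a framework, combining proximity to the original input with a determinantal-point-process (DPP) diversity term; the kernel choice, the weighting parameter lam, and the function name are assumptions for illustration, and the snippet omits the validity term that forces each counterfactual to actually flip the model's prediction:

        # Illustrative scoring of a candidate set of counterfactuals: close to the
        # query point, yet mutually diverse via a DPP-style determinant.
        import numpy as np

        def cf_set_score(query, counterfactuals, lam=1.0):
            """query: (d,) original input; counterfactuals: (k, d) candidate set.
            Higher is better."""
            cfs = np.asarray(counterfactuals, dtype=float)
            # Proximity term: negative mean distance of the candidates to the query.
            proximity = -np.mean(np.linalg.norm(cfs - query, axis=1))
            # DPP kernel: K_ij = 1 / (1 + dist(cf_i, cf_j)); det(K) grows with diversity.
            dists = np.linalg.norm(cfs[:, None, :] - cfs[None, :, :], axis=-1)
            K = 1.0 / (1.0 + dists)
            diversity = np.linalg.det(K)
            return proximity + lam * diversity

    The released implementation at the linked repository wraps objectives of this kind behind a higher-level interface; the sketch above only shows how a candidate set might be scored.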

    Predicting diabetes-related hospitalizations based on electronic health records

    OBJECTIVE: To derive a predictive model to identify patients likely to be hospitalized during the following year due to complications attributed to Type II diabetes. METHODS: A variety of supervised machine learning classification methods were tested, and a new method was developed that discovers hidden patient clusters in the positive class (hospitalized) while, at the same time, deriving sparse linear support vector machine classifiers to separate positive samples from the negative ones (non-hospitalized). The convergence of the new method was established and theoretical guarantees were proved on how the classifiers it produces generalize to a test set not seen during training. RESULTS: The methods were tested on a large set of patients from the Boston Medical Center - the largest safety net hospital in New England. We find that our new joint clustering/classification method achieves an accuracy of 89% (measured in terms of area under the ROC Curve) and yields informative clusters which can help interpret the classification results, thus increasing physicians' trust in the algorithmic output and providing some guidance towards preventive measures. While it is possible to increase accuracy to 92% with other methods, this comes with increased computational cost and lack of interpretability. The analysis shows that even a modest probability of preventive actions being effective (more than 19%) suffices to generate significant hospital care savings. CONCLUSIONS: Predictive models are proposed that can help avert hospitalizations, improve health outcomes and drastically reduce hospital expenditures. The scope for savings is significant, as it has been estimated that in the USA alone, about $5.8 billion is spent each year on diabetes-related hospitalizations that could be prevented.
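
    A minimal sketch of the alternating clustering/classification idea, assuming K hidden clusters in the positive (hospitalized) class and one sparse (L1-regularized) linear SVM per cluster trained against all negative samples; the function name, random initialization, and fixed iteration count are illustrative, and this naive loop does not carry the convergence guarantees proved in the paper:

        # Illustrative joint clustering/classification: alternate between training
        # one sparse linear SVM per positive cluster and reassigning each positive
        # sample to the cluster whose SVM scores it highest.
        import numpy as np
        from sklearn.svm import LinearSVC

        def joint_cluster_classify(X_pos, X_neg, K=3, n_iters=10, seed=0):
            rng = np.random.default_rng(seed)
            assign = rng.integers(0, K, size=len(X_pos))   # random initial clusters
            y_neg = -np.ones(len(X_neg))
            svms = [None] * K
            for _ in range(n_iters):
                # Classification step: one sparse (L1) linear SVM per positive cluster.
                for k in range(K):
                    Xk = X_pos[assign == k]
                    if len(Xk) == 0:
                        continue
                    X = np.vstack([Xk, X_neg])
                    y = np.concatenate([np.ones(len(Xk)), y_neg])
                    svms[k] = LinearSVC(penalty="l1", dual=False, C=1.0).fit(X, y)
                # Clustering step: reassign each positive sample to its best-scoring SVM.
                scores = np.column_stack([
                    svm.decision_function(X_pos) if svm is not None
                    else np.full(len(X_pos), -np.inf)
                    for svm in svms
                ])
                assign = scores.argmax(axis=1)
            return svms, assign

    The per-cluster L1 penalty is what keeps each classifier sparse, so the features driving a given cluster's hospitalization risk remain readable.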

    Predicting Chronic Disease Hospitalizations from Electronic Health Records: An Interpretable Classification Approach

    Urban living in modern large cities has significant adverse effects on health, increasing the risk of several chronic diseases. We focus on the two leading clusters of chronic disease, heart disease and diabetes, and develop data-driven methods to predict hospitalizations due to these conditions. We base these predictions on the patients' medical history, recent and more distant, as described in their Electronic Health Records (EHR). We formulate the prediction problem as a binary classification problem and consider a variety of machine learning methods, including kernelized and sparse Support Vector Machines (SVM), sparse logistic regression, and random forests. To strike a balance between accuracy and interpretability of the prediction, which is important in a medical setting, we propose two novel methods: K-LRT, a likelihood ratio test-based method, and a Joint Clustering and Classification (JCC) method which identifies hidden patient clusters and adapts classifiers to each cluster. We develop theoretical out-of-sample guarantees for the latter method. We validate our algorithms on large datasets from the Boston Medical Center, the largest safety-net hospital system in New England.
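
    A minimal sketch of a K-LRT-style decision rule under a naive per-feature Gaussian assumption: per-feature log-likelihood ratios are computed for each patient and only the K largest contributions enter the decision, which is what makes the prediction attributable to a handful of EHR features. The densities, selection rule, and threshold here are illustrative assumptions, not the paper's exact formulation:

        # Illustrative K-LRT-style classifier: keep only each patient's K most
        # informative per-feature log-likelihood ratios and threshold their sum.
        import numpy as np
        from scipy.stats import norm

        def k_lrt_predict(X_train, y_train, X_test, K=5, threshold=0.0):
            pos, neg = X_train[y_train == 1], X_train[y_train == 0]
            mu1, s1 = pos.mean(axis=0), pos.std(axis=0) + 1e-6
            mu0, s0 = neg.mean(axis=0), neg.std(axis=0) + 1e-6
            # Per-feature log-likelihood ratio (hospitalized vs. not) per patient.
            llr = norm.logpdf(X_test, mu1, s1) - norm.logpdf(X_test, mu0, s0)
            # Keep each patient's K largest contributions only.
            top_k = np.sort(llr, axis=1)[:, -K:]
            return (top_k.sum(axis=1) > threshold).astype(int)

    The JCC method mentioned alongside K-LRT follows the same alternating clustering/classification scheme sketched after the previous abstract.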

    Guidelines for the use and interpretation of assays for monitoring autophagy (4th edition)

    In 2008, we published the first set of guidelines for standardizing research in autophagy. Since then, this topic has received increasing attention, and many scientists have entered the field. Our knowledge base and relevant new technologies have also been expanding. Thus, it is important to formulate on a regular basis updated guidelines for monitoring autophagy in different organisms. Despite numerous reviews, there continues to be confusion regarding acceptable methods to evaluate autophagy, especially in multicellular eukaryotes. Here, we present a set of guidelines for investigators to select and interpret methods to examine autophagy and related processes, and for reviewers to provide realistic and reasonable critiques of reports that are focused on these processes. These guidelines are not meant to be a dogmatic set of rules, because the appropriateness of any assay largely depends on the question being asked and the system being used. Moreover, no individual assay is perfect for every situation, calling for the use of multiple techniques to properly monitor autophagy in each experimental setting. Finally, several core components of the autophagy machinery have been implicated in distinct autophagic processes (canonical and noncanonical autophagy), implying that genetic approaches to block autophagy should rely on targeting two or more autophagy-related genes that ideally participate in distinct steps of the pathway. Along similar lines, because multiple proteins involved in autophagy also regulate other cellular pathways including apoptosis, not all of them can be used as a specific marker for bona fide autophagic responses. Here, we critically discuss current methods of assessing autophagy and the information they can, or cannot, provide. Our ultimate goal is to encourage intellectual and technical innovation in the field.